
    Slow Sphering to Suppress Non-Stationaries in the EEG

    Non-stationary signals are ubiquitous in electroencephalogram (EEG) recordings and pose a problem for the robust application of brain-computer interfaces (BCIs). These non-stationarities can be caused by changes in neural background activity. We present a dynamic spatial filter based on time-local whitening that significantly reduces the detrimental influence of covariance changes during event-related desynchronization classification of an imaginary movement task.
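    The core idea of time-local whitening (sphering) can be sketched as follows: each sample is decorrelated using a covariance matrix estimated from a trailing window, so that slow covariance drift is tracked and suppressed. This is an illustrative sketch, not the authors' exact method; the window length and regularization constant are assumptions.

    ```python
    import numpy as np

    def local_whitening(eeg, window, eps=1e-6):
        """Sphere each sample with a covariance estimate from a trailing window.

        eeg:    array of shape (n_samples, n_channels)
        window: number of past samples in the local covariance estimate
        eps:    regularizer added to the covariance (illustrative value)
        Samples before the first full window are passed through unchanged.
        """
        n, c = eeg.shape
        out = eeg.astype(float).copy()
        for t in range(window - 1, n):
            seg = eeg[t - window + 1:t + 1]
            cov = np.cov(seg, rowvar=False) + eps * np.eye(c)
            # inverse matrix square root via eigendecomposition
            w, v = np.linalg.eigh(cov)
            sphere = v @ np.diag(1.0 / np.sqrt(w)) @ v.T
            out[t] = sphere @ eeg[t]
        return out
    ```

    On a stationary signal the whitened output has approximately identity covariance; when the background covariance drifts slowly, the trailing window lets the filter follow it.
    
    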

    Detecting Mislabeled Data Using Supervised Machine Learning Techniques


    Classifying motor imagery in presence of speech

    In the near future, brain-computer interface (BCI) applications for non-disabled users will require multimodal interaction and tolerance to dynamic environments. However, this conflicts with the highly sensitive recording techniques used for BCIs, such as electroencephalography (EEG). Advanced machine learning and signal processing techniques are required to separate the desired brain signals from the rest. This paper proposes a signal processing pipeline and two classification methods suitable for multiclass EEG analysis. The methods were tested in an experiment on separating left/right hand imagery in the presence/absence of speech. The analyses showed that the presence of speech during motor imagery did not affect classification accuracy significantly; regardless of the presence of speech, the proposed methods were able to separate left and right hand imagery with an accuracy of 60%. The best overall accuracy achieved for the 5-class separation of all the tasks was 47%, and both proposed methods performed equally well. In addition, the analysis of event-related spectral power changes revealed characteristics related to motor imagery and speech.

    Towards real-time body pose estimation for presenters in meeting environments

    This paper describes a computer vision-based approach to body pose estimation. The algorithm can be executed in real-time and processes low resolution, monocular image sequences. A silhouette is extracted and matched against a projection of a 16 DOF human body model. In addition, skin color is used to locate hands and head. No detailed human body model is needed. We evaluate the approach both quantitatively using synthetic image sequences and qualitatively on video test data of short presentations. The algorithm is developed with the aim of using it in the context of a meeting room where the poses of a presenter have to be estimated. The results can be applied in the domain of virtual environments.

    Example-based pose estimation in monocular images using compact fourier descriptors

    Automatically estimating human poses from visual input is useful but challenging due to variations in image space and the high dimensionality of the pose space. In this paper, we assume that a human silhouette can be extracted from monocular visual input. We compare the recovery performance of Fourier descriptors with a number of coefficients between 8 and 128, and two different sampling methods. An example-based approach is taken to recover upper body poses from the descriptors. We test the robustness of our approach by investigating how shape deformations due to changes in body dimensions, viewpoint and noise affect the recovery of the pose. The average error per joint is approximately 16-17° for equidistant sampling and slightly higher for extreme point sampling. Increasing the number of descriptors does not have any influence on the performance. Noise and small changes in viewpoint have only a very small effect on the recovery performance, but we obtain higher error scores when recovering poses using silhouettes from a person with different body dimensions.
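    Fourier descriptors of a silhouette with equidistant sampling, as discussed above, can be sketched like this: the closed contour is resampled at equal arc-length intervals, treated as a complex signal x + iy, and its low-frequency Fourier coefficients are kept. Dropping the DC term gives translation invariance and dividing by the first coefficient's magnitude gives scale invariance. The normalization details here are illustrative assumptions, not necessarily the paper's exact formulation.

    ```python
    import numpy as np

    def fourier_descriptors(contour, n_coeffs=8, n_samples=64):
        """Compact Fourier descriptors of a closed 2-D contour.

        contour: (N, 2) array of ordered boundary points.
        Resamples equidistantly along arc length, takes the FFT of x + iy,
        drops the DC term (translation invariance), takes magnitudes
        (start-point/rotation invariance) and scales by the first
        coefficient's magnitude (scale invariance). Illustrative sketch.
        """
        pts = np.asarray(contour, dtype=float)
        closed = np.vstack([pts, pts[:1]])                       # close the contour
        seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])              # cumulative arc length
        targets = np.linspace(0.0, s[-1], n_samples, endpoint=False)
        x = np.interp(targets, s, closed[:, 0])                  # equidistant resampling
        y = np.interp(targets, s, closed[:, 1])
        coeffs = np.fft.fft(x + 1j * y)
        coeffs = coeffs[1:n_coeffs + 1]                          # drop DC term
        return np.abs(coeffs) / (np.abs(coeffs[0]) + 1e-12)
    ```

    With this normalization, a translated and uniformly scaled copy of a shape yields the same descriptor vector, which is what makes nearest-neighbor matching against stored examples feasible.
    
    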

    Robustness of the Common Spatial Patterns algorithm in the BCI-pipeline

    When we want to use brain-computer interfaces (BCI) as an input modality for gaming, a short setup procedure is necessary. Therefore a user model has to be learned using small training sets. The common spatial patterns (CSP) algorithm is often used in BCI. In this work we investigate how the CSP algorithm generalizes when using small training sets, how the performance changes over time, and how well CSP generalizes across persons. Our results indicate that the CSP algorithm severely overfits on small training sets. The CSP algorithm often selects a small number of spatial filters that generalize poorly, which can have an impact on the classification performance. The generalization performance does not degrade over time, which is promising, but the signal does not seem to be stationary. In its current form, the CSP algorithm generalizes poorly across persons.
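    A standard formulation of the CSP algorithm referenced above computes spatial filters via a whitening step followed by an eigendecomposition, so that the resulting filters maximize the variance of one class while minimizing it for the other. This sketch is a textbook version under assumed data shapes; regularization and filter-selection heuristics, which the paper's overfitting findings concern, are omitted.

    ```python
    import numpy as np

    def csp_filters(X1, X2, n_filters=2):
        """Common Spatial Patterns via whitening + eigendecomposition.

        X1, X2: trial arrays of shape (n_trials, n_channels, n_samples),
                one per class. Returns n_filters spatial filters (rows),
                taken from both ends of the eigenvalue spectrum.
        Illustrative textbook sketch; no regularization.
        """
        def avg_cov(X):
            return np.mean([np.cov(trial) for trial in X], axis=0)

        c1, c2 = avg_cov(X1), avg_cov(X2)
        # whiten the composite covariance c1 + c2
        w, v = np.linalg.eigh(c1 + c2)
        P = np.diag(1.0 / np.sqrt(w)) @ v.T
        # eigendecompose the whitened class-1 covariance
        s, b = np.linalg.eigh(P @ c1 @ P.T)
        order = np.argsort(s)[::-1]                  # descending class-1 variance
        W = b[:, order].T @ P
        half = n_filters // 2
        # filters from both ends: high class-1 variance and high class-2 variance
        return np.vstack([W[:half], W[-half:]]) if n_filters > 1 else W[:1]
    ```

    Because each filter is fit to maximize a variance ratio on the training covariances, a small training set makes those covariance estimates noisy, which is one plausible mechanism for the overfitting the abstract reports.
    
    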

    Layering techniques for development of parallel systems: An algebraic approach


    Affective Pacman: A Frustrating Game for Brain-Computer Interface Experiments


    A POMDP approach to Affective Dialogue Modeling

    We propose a novel approach to developing a dialogue model that is able to take into account some aspects of the user's affective state and to act appropriately. Our dialogue model uses a Partially Observable Markov Decision Process (POMDP) approach with observations composed of the observed user's affective state and action. A simple example of route navigation is explained to clarify our approach. The preliminary results showed that: (1) the expected return of the optimal dialogue strategy depends on the correlation between the user's affective state and the user's action, and (2) the POMDP dialogue strategy outperforms five other dialogue strategies (random, three handcrafted, and greedy action-selection strategies).
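    The POMDP machinery underlying such a dialogue model can be illustrated with the discrete Bayesian belief update: the system maintains a distribution over the hidden (affective) state and revises it after each action and observation. The state names, matrices, and sizes below are hypothetical illustrations, not the paper's model.

    ```python
    import numpy as np

    def belief_update(belief, T, O, action, obs):
        """Discrete POMDP belief update (illustrative sketch).

        belief: prior distribution over hidden states, shape (n_states,)
        T[a]:   transition matrix, T[a][s, s'] = P(s' | s, a)
        O[a]:   observation model, O[a][s', o] = P(o | s', a)
        Returns the normalized posterior over hidden states.
        """
        predicted = belief @ T[action]              # predict next hidden state
        updated = predicted * O[action][:, obs]     # weight by observation likelihood
        return updated / updated.sum()
    ```

    The optimal policy then maps beliefs (rather than states) to actions; solving for it exactly is expensive, which is why small hand-sized examples like a route-navigation task are typical testbeds.
    
    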